Emotional prosody or affective prosody comprises the various paralinguistic aspects of language use that convey emotion. It includes an individual's tone of voice in speech, conveyed through changes in pitch, loudness, timbre, speech rate, and pausing. It can be isolated from semantic information, and it interacts with verbal content (e.g. sarcasm).
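As a rough illustration of these acoustic correlates, the following sketch (not drawn from any cited study; the file name, thresholds, and feature choices are assumptions made purely for illustration) estimates pitch, loudness, and pausing from a speech recording using the librosa library.

```python
# Illustrative sketch only: estimates a few acoustic correlates of prosody.
# The input file "utterance.wav" is hypothetical; thresholds are arbitrary.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical speech recording

# Pitch contour (fundamental frequency) via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
mean_pitch = np.nanmean(f0)                    # average F0 over voiced frames (Hz)
pitch_range = np.nanmax(f0) - np.nanmin(f0)    # span of the pitch contour (Hz)

# Loudness proxy: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]
mean_loudness = float(np.mean(rms))

# Pausing: proportion of the recording falling below a silence threshold.
speech_intervals = librosa.effects.split(y, top_db=30)
speech_samples = sum(end - start for start, end in speech_intervals)
pause_ratio = 1.0 - speech_samples / len(y)

print(f"mean F0: {mean_pitch:.1f} Hz, F0 range: {pitch_range:.1f} Hz")
print(f"mean RMS energy: {mean_loudness:.4f}, pause ratio: {pause_ratio:.2f}")
```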
Emotional prosody in speech is perceived or decoded slightly less accurately than facial expressions, but accuracy varies across emotions.
On average, listeners identify the intended emotion in a voice sample at rates significantly better than chance (chance being approximately 10%). However, error rates are also high, partly because listeners are more accurate at emotional inference from particular voices and perceive some emotions better than others. Vocal expressions of anger and sadness are perceived most easily, fear and happiness are only moderately well perceived, and disgust is perceived most poorly.
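Purely to illustrate the "better than chance" comparison described above, the following sketch (using made-up numbers rather than data from the cited work) tests whether a hypothetical listener's hit rate exceeds a ten-percent chance level with a one-sided binomial test.

```python
# Illustrative only: hypothetical counts, not data from any study cited above.
from scipy.stats import binomtest

n_trials = 200        # voice samples presented to one listener (hypothetical)
n_correct = 68        # emotions identified correctly (hypothetical)
chance_level = 0.10   # ~10 emotion categories => ~10% accuracy by guessing

result = binomtest(n_correct, n_trials, p=chance_level, alternative="greater")
print(f"hit rate: {n_correct / n_trials:.2f}, p-value vs. chance: {result.pvalue:.3g}")
```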
The neurological processes that integrate verbal and vocal (prosodic) components are relatively unclear. It is assumed, however, that verbal content and prosody are processed in different hemispheres of the brain. Verbal content, composed of syntactic and semantic information, is processed in the left hemisphere: syntactic information primarily in the frontal regions and a small part of the temporal lobe, and semantic information primarily in the temporal regions with a smaller contribution from the frontal lobes. In contrast, prosody is processed primarily along the same temporo-frontal pathway, but in the right hemisphere. Neuroimaging studies using functional magnetic resonance imaging (fMRI) provide further support for this hemispheric lateralization and temporo-frontal activation. Some studies, however, provide evidence that prosody perception is not exclusively lateralized to the right hemisphere and may be more bilateral. There is also some evidence that the basal ganglia play an important role in the perception of prosody.
Recognizing vocal expressions of emotion becomes increasingly difficult with age. Older adults have slightly more difficulty than young adults in labeling vocal expressions of emotion, particularly sadness and anger, and have much greater difficulty integrating vocal emotions with corresponding facial expressions. A possible explanation is that combining two sources of emotional information requires greater activation of emotion-processing areas of the brain, in which older adults show decreased volume and activity. Another possible explanation is that hearing loss leads to mishearing of vocal expressions; high-frequency hearing loss typically begins around the age of 50, particularly in men.
Because the right hemisphere of the brain is associated with prosody, patients with right-hemisphere lesions have difficulty varying speech patterns to convey emotion, and their speech may therefore sound monotonous. People with right-hemisphere damage have also been shown to be impaired in identifying the emotion conveyed by intoned sentences.
Difficulty in decoding both syntactic and affective prosody is also found in people with autism spectrum disorder and schizophrenia, where "patients have deficits in a large number of functional domains, including social cognition. These social impairments consist of difficulties in perceiving, understanding, anticipating and reacting to social cues that are crucial for normal social interaction." This has been examined in multiple studies, such as Hoekert et al.'s study on emotional prosody in schizophrenia, which illustrated that more research must be done to fully confirm the correlation between the illness and emotional prosody. However, people with schizophrenia have no problem deciphering non-emotional prosody.
In addition, research has shown that emotion can be expressed in non-linguistic vocalizations differently than in speech. As Laukka et al. state: "Speech requires highly precise and coordinated movement of the articulators (e.g., lips, tongue, and larynx) in order to transmit linguistic information, whereas non-linguistic vocalizations are not constrained by linguistic codes and thus do not require such precise articulations. This entails that non-linguistic vocalizations can exhibit larger ranges for many acoustic features than prosodic expressions."
In their study, actors were instructed to vocalize an array of different emotions without words. The study showed that listeners could identify a wide range of positive and negative emotions above chance. However, emotions like guilt and pride were less easily recognized.
In a 2015 study by Verena Kersken, Klaus Zuberbühler and Juan-Carlos Gomez, non-linguistic infant vocalizations were presented to adults to see whether the adults could distinguish among vocalizations indicating a request for help, pointing to an object, or indicating an event. Infants show different prosodic elements in crying depending on what they are crying for, and they produce different outbursts for positive and negative emotional states. The ability to decipher this information was found to apply across cultures and to be independent of the adults' level of experience with infants.
Women and men also differ in how they neurologically process emotional prosody. In an fMRI study, men showed stronger activation in more cortical areas than female subjects when processing the meaning or manner of an emotional phrase. In the manner task, men had more activation in the bilateral middle temporal gyri; for women, the only area of significant activation was the posterior lobe. Male subjects in this study also showed stronger activation in the prefrontal cortex and, on average, needed a longer response time than female subjects. This result was interpreted to mean that men need to make conscious inferences about the acts and intentions of the speaker, while women may do this subconsciously. Men therefore needed to integrate linguistic semantics and emotional intent "at a higher stage than the semantic processing stage."